    Optimal stimulation protocol in a bistable synaptic consolidation model

    Consolidation of synaptic changes in response to neural activity is thought to be fundamental for memory maintenance over a timescale of hours. In experiments, synaptic consolidation can be induced by repeatedly stimulating presynaptic neurons. However, the effectiveness of such protocols depends crucially on the repetition frequency of the stimulation, and the mechanisms behind this complex dependence are unknown. Here we propose a simple mathematical model that allows us to systematically study the interaction between the stimulation protocol and synaptic consolidation. We show that optimal stimulation protocols exist for our model and that, as in LTP experiments, the repetition frequency of the stimulation plays a crucial role in achieving consolidation. Our results show that the complex dependence of LTP on the stimulation frequency emerges naturally from a model that satisfies only minimal bistability requirements. (Comment: 23 pages, 6 figures)
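
    As a concrete illustration of the kind of dynamics this abstract describes (a minimal sketch, not the authors' actual model), the code below simulates a bistable synapse: a cubic drift with stable states at w = 0 and w = 1 separated by an unstable threshold, plus a brief potentiation kick at each stimulation. The pulse amplitude, threshold, and timescale are illustrative assumptions.

```python
import numpy as np

def simulate(pulse_times, amp=0.2, theta=0.5, tau=1.0, dt=1e-3, T=60.0):
    """Minimal bistable synapse (illustrative, not the paper's model):
    stable states w=0 (weak) and w=1 (consolidated), separated by an
    unstable threshold theta."""
    w = 0.0
    n_steps = int(T / dt)
    pulse_steps = {int(round(t / dt)) for t in pulse_times}
    for i in range(n_steps):
        # Cubic bistable drift: decays toward 0 below theta, grows toward 1 above.
        w += dt / tau * (-w * (w - theta) * (w - 1.0))
        if i in pulse_steps:
            w += amp  # each presynaptic stimulation gives a brief potentiation kick
    return w

# Same number of pulses, different repetition frequencies: only pulses that
# arrive faster than the decay back to baseline push w past the threshold.
for isi in (0.5, 2.0, 10.0):  # inter-stimulus interval, in units of tau
    final_w = simulate([k * isi for k in range(5)])
    print(f"ISI = {isi:4.1f} tau  ->  final weight {final_w:.2f}")
```

    In this toy, only the fastest protocol consolidates (w ends near 1); the slower protocols decay back to the weak state between pulses, mirroring the frequency dependence the abstract emphasizes.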

    How single neuron properties shape chaotic dynamics and signal transmission in random neural networks

    While most models of randomly connected networks assume nodes with simple dynamics, nodes in realistic highly connected networks, such as neurons in the brain, exhibit intrinsic dynamics over multiple timescales. We analyze how the dynamical properties of nodes (such as single neurons) and recurrent connections interact to shape the effective dynamics in large randomly connected networks. A novel dynamical mean-field theory for strongly connected networks of multi-dimensional rate units shows that the power spectrum of the network activity in the chaotic phase emerges from a nonlinear sharpening of the frequency response function of single units. For the case of two-dimensional rate units with strong adaptation, we find that the network exhibits a state of "resonant chaos", characterized by robust, narrow-band stochastic oscillations. The coherence of these stochastic oscillations is maximal at the onset of chaos, and their correlation time scales with the adaptation timescale of single units. Surprisingly, the resonance frequency can be predicted from the properties of isolated units, even in the presence of heterogeneity in the adaptation parameters. In the presence of these internally generated chaotic fluctuations, the transmission of weak, low-frequency signals is strongly enhanced by adaptation, whereas signal transmission is not influenced by adaptation in the non-chaotic regime. Our theoretical framework can be applied to other mechanisms at the level of single nodes, such as synaptic filtering, refractoriness, or spike synchronization. These results advance our understanding of the interaction between the dynamics of single units and recurrent connectivity, a fundamental step toward the description of biologically realistic network models in the brain or, more generally, of networks of other physical or man-made complex dynamical units.
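
    A rough sketch of how such a network can be simulated (this is not the paper's mean-field derivation, and all parameter values are illustrative assumptions): each unit has an activity variable x and a slow adaptation variable a, the random coupling matrix J uses the standard g^2/N variance scaling, and the power spectrum of a single unit's activity is the diagnostic for a narrow-band ("resonant") peak.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly connected network of two-dimensional rate units (activity x,
# adaptation a). Parameters are illustrative, not taken from the paper.
N = 500                  # network size
g = 2.5                  # coupling gain (strong-coupling / chaotic regime)
beta, tau_a = 4.0, 10.0  # adaptation strength and timescale
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

dt, T = 0.05, 1000.0
n_steps = int(T / dt)
x = rng.normal(0.0, 0.5, N)
a = np.zeros(N)
trace = np.empty(n_steps)

for k in range(n_steps):
    r = np.tanh(x)                   # firing-rate nonlinearity
    dx = -x - beta * a + J @ r       # leaky rate dynamics + adaptation current
    da = (x - a) / tau_a             # slow adaptation variable
    x += dt * dx
    a += dt * da
    trace[k] = x[0]

# Power spectrum of a single unit: in a "resonant chaos" regime this should
# show a pronounced narrow-band peak rather than broadband low-frequency power.
spec = np.abs(np.fft.rfft(trace - trace.mean())) ** 2
freqs = np.fft.rfftfreq(n_steps, d=dt)
print("spectral peak near f =", freqs[1:][np.argmax(spec[1:])])
```

    Varying tau_a in this sketch is a simple way to probe the abstract's claim that the correlation time of the oscillations scales with the adaptation timescale of single units.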

    Task-dependent optimal representations for cerebellar learning

    The cerebellar granule cell layer has inspired numerous theoretical models of neural representations that support learned behaviors, beginning with the work of Marr and Albus. In these models, granule cells form a sparse, combinatorial encoding of diverse sensorimotor inputs. Such sparse representations are optimal for learning to discriminate random stimuli. However, recent observations of dense, low-dimensional activity across granule cells have called into question the role of sparse coding in these neurons. Here, we generalize theories of cerebellar learning to determine the optimal granule cell representation for tasks beyond random stimulus discrimination, including continuous input-output transformations as required for smooth motor control. We show that for such tasks, the optimal granule cell representation is substantially denser than predicted by classical theories. Our results provide a general theory of learning in cerebellum-like systems and suggest that optimal cerebellar representations are task-dependent.
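
    To make the sparse-versus-dense trade-off concrete, here is a hedged toy (not the paper's model; all sizes, parameters, and the target function are invented for illustration): a fixed random expansion layer whose threshold sets the coding level f, with a ridge-regression readout trained on a smooth input-output map standing in for continuous motor control.

```python
import numpy as np

rng = np.random.default_rng(1)

# Granule-cell-like expansion: D mossy-fiber inputs project through a fixed
# random matrix W to M expansion units; a threshold sets the coding level f
# (fraction of active units). All sizes and parameters are illustrative.
D, M = 20, 2000
W = rng.normal(size=(M, D)) / np.sqrt(D)
v = rng.normal(size=D) / np.sqrt(D)   # fixed direction defining the target

def expand(X, f):
    """Rectified random expansion with coding level f."""
    h = X @ W.T
    theta = np.quantile(h, 1.0 - f)   # global threshold -> fraction f active
    return np.maximum(h - theta, 0.0)

def ridge_fit(G, y, lam=1e-3):
    """Linear readout weights by ridge regression."""
    return np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ y)

# Smooth target: a low-frequency function of the inputs, a stand-in for a
# continuous sensorimotor transformation rather than random categorization.
X_train = rng.normal(size=(500, D))
X_test = rng.normal(size=(500, D))
y_train, y_test = np.sin(2.0 * X_train @ v), np.sin(2.0 * X_test @ v)

for f in (0.05, 0.5):                 # sparse vs. dense granule-cell code
    w = ridge_fit(expand(X_train, f), y_train)
    err = np.mean((expand(X_test, f) @ w - y_test) ** 2) / np.var(y_test)
    print(f"coding level {f:.2f}: normalized test error {err:.3f}")
```

    The toy only illustrates the qualitative trade-off: for a smooth target like this one, a denser code should generalize better, in line with the abstract's claim, whereas replacing the target with random labels would tip the balance back toward sparse codes.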